Historical trajectories that previously passed through a location may help infer the future trajectory of an agent currently at that location. Although trajectory prediction has greatly improved under the guidance of high-definition (HD) maps, only a few works have explored such local historical information. In this work, we re-introduce this information as a new type of input data for trajectory prediction systems: local behavior data, which we conceptualize as a collection of location-specific historical trajectories. Local behavior data helps the system emphasize the prediction region and better understand the influence of static map objects on moving agents. We propose a novel Local-Behavior-Aware (LBA) prediction framework that improves prediction accuracy by fusing information from observed trajectories, HD maps, and local behavior data. Furthermore, when such historical data is insufficient or unavailable, we adopt a Local-Behavior-Free (LBF) prediction framework, which employs a knowledge-distillation-based architecture to infer the impact of the missing data. Extensive experiments demonstrate that upgrading existing methods with these two frameworks significantly improves their performance. In particular, the LBA framework improves the performance of SOTA methods on the nuScenes dataset by at least 14% on the K=1 metrics.
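To make the fusion idea concrete, here is a minimal PyTorch sketch of how observed trajectories, HD-map features, and local behavior trajectories could be combined via attention. All module names, feature dimensions, and the attention scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalBehaviorFusion(nn.Module):
    """Hypothetical fusion of an agent's observed trajectory with HD-map
    features and location-specific historical ("local behavior") trajectories."""
    def __init__(self, d_model=128, horizon=12):
        super().__init__()
        self.traj_enc = nn.GRU(input_size=2, hidden_size=d_model, batch_first=True)
        self.behavior_enc = nn.GRU(input_size=2, hidden_size=d_model, batch_first=True)
        self.map_enc = nn.Linear(64, d_model)        # assumes pre-extracted map features
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, horizon * 2)  # future (x, y) per step

    def forward(self, obs_traj, map_feat, behavior_trajs):
        # obs_traj: (B, T_obs, 2); map_feat: (B, 64); behavior_trajs: (B, N, T_hist, 2)
        B, N, T, _ = behavior_trajs.shape
        _, h = self.traj_enc(obs_traj)                           # (1, B, D)
        query = h.transpose(0, 1)                                # (B, 1, D)
        _, hb = self.behavior_enc(behavior_trajs.reshape(B * N, T, 2))
        behavior_tokens = hb.transpose(0, 1).reshape(B, N, -1)   # (B, N, D)
        keys = torch.cat([behavior_tokens, self.map_enc(map_feat)[:, None]], dim=1)
        fused, _ = self.attn(query, keys, keys)  # agent attends to behaviors + map
        return self.head(fused.squeeze(1)).view(B, -1, 2)        # (B, horizon, 2)
```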
In multi-modal multi-agent trajectory forecasting, two major challenges have not been fully tackled: 1) how to measure the uncertainty brought by the interaction module, which causes correlations among the predicted trajectories of multiple agents; 2) how to rank the multiple predictions and select the optimal predicted trajectory. To handle these challenges, this work first proposes a novel concept, collaborative uncertainty (CU), which models the uncertainty brought by interaction modules. We then build a general CU-aware regression framework with an original permutation-equivariant uncertainty estimator to perform both the regression and uncertainty estimation tasks. Furthermore, we apply the proposed framework to current SOTA multi-agent multi-modal forecasting systems as a plugin module, which enables the SOTA systems to 1) estimate the uncertainty in the multi-agent multi-modal trajectory forecasting task; 2) rank the multiple predictions and select the optimal one based on the estimated uncertainty. We conduct extensive experiments on a synthetic dataset and two public large-scale multi-agent trajectory forecasting benchmarks. The experiments show that: 1) on the synthetic dataset, the CU-aware regression framework allows the model to appropriately approximate the ground-truth Laplace distribution; 2) on the multi-agent trajectory forecasting benchmarks, the CU-aware regression framework steadily helps SOTA systems improve their performance; in particular, the proposed framework helps VectorNet improve by 262 cm regarding the Final Displacement Error of the chosen optimal predicted trajectory on the nuScenes dataset; 3) for multi-agent multi-modal trajectory forecasting systems, prediction uncertainty is positively correlated with future stochasticity; 4) the estimated CU values are highly related to the interactive information among agents.
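The abstract mentions approximating a ground-truth Laplace distribution and ranking predictions by estimated uncertainty. Below is a hedged sketch of those two building blocks: a Laplace negative log-likelihood for uncertainty-aware regression, and a simple illustrative heuristic that selects the mode with the smallest aggregate predicted scale. Neither is the paper's exact formulation.

```python
import torch

def laplace_nll(mu, log_b, target):
    # mu, target: (..., 2) predicted / ground-truth positions
    # log_b: (..., 2) predicted log-scale, so b = exp(log_b) stays positive
    # NLL of a Laplace distribution, up to an additive constant log(2)
    b = log_b.exp()
    return (torch.abs(target - mu) / b + log_b).sum(-1).mean()

def select_mode(mu_modes, log_b_modes):
    # mu_modes: (K, T, 2) trajectories; log_b_modes: (K, T, 2) log-scales
    # Illustrative assumption: rank modes by total predicted uncertainty
    # and return the most confident one.
    scores = log_b_modes.exp().sum(dim=(1, 2))
    return mu_modes[scores.argmin()]
```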
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene, leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF can surpass existing methods and achieve state-of-the-art results.
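As a toy sketch of the core aggregation step (assumptions throughout, not the released Point-NeRF code): for a shading point on a marched ray, the features of nearby neural points can be combined with inverse-distance weights before decoding radiance. The radius and weighting scheme below are illustrative.

```python
import torch

def aggregate_point_features(x, pts, feats, radius=0.1):
    # x: (3,) query location on the ray; pts: (N, 3) neural point positions
    # feats: (N, C) per-point neural features
    d = torch.linalg.norm(pts - x, dim=-1)            # (N,) distances to query
    mask = d < radius                                  # keep only nearby points
    if not mask.any():
        return torch.zeros(feats.shape[-1])            # empty space: no contribution
    w = 1.0 / (d[mask] + 1e-8)                         # inverse-distance weights
    w = w / w.sum()
    return (w[:, None] * feats[mask]).sum(dim=0)       # (C,) aggregated feature
```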
Most self-supervised monocular depth estimation methods focus on driving scenarios. We show that such methods generalize poorly to unseen complex indoor scenes, where objects are cluttered and arbitrarily arranged in the near field. To obtain more robustness, we propose a structure distillation approach to learn the knack from a pretrained depth estimator that produces structured but metric-agnostic depth due to its training on in-the-wild mixed datasets. By combining distillation with a self-supervised branch that learns metrics from left-right consistency, we obtain structured and metric depth for generic indoor scenes and make inferences in real time. To facilitate learning and evaluation, we collect SimSIN, a dataset from simulation with thousands of environments, and UniSIN, a dataset that contains about 500 real scan sequences of generic indoor environments. We experiment in sim-to-real and real settings, and show improvements both qualitatively and quantitatively, as well as in downstream applications using our depth maps. This work provides a full study, covering methods, data, and applications. We believe this work lays a solid foundation for practical indoor depth estimation via self-supervision.
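A hedged sketch of the kind of combined objective the abstract describes: a structure-distillation term against a pretrained, metric-agnostic teacher, plus a self-supervised stereo term that supplies metric scale. The loss forms, median alignment, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def structure_distillation_loss(student_depth, teacher_depth):
    # The teacher is metric-agnostic, so compare depths after median alignment.
    s = teacher_depth.median() / (student_depth.median() + 1e-8)
    return torch.abs(student_depth * s - teacher_depth).mean()

def combined_loss(student_depth, teacher_depth, photometric_lr_loss, w_distill=0.5):
    # photometric_lr_loss: precomputed left-right photometric consistency term,
    # which anchors the metric scale that the distillation term cannot provide
    return w_distill * structure_distillation_loss(student_depth, teacher_depth) \
        + (1.0 - w_distill) * photometric_lr_loss
```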
Advances in LiDAR sensors provide rich 3D data that supports 3D scene understanding. However, due to occlusion and signal miss, LiDAR point clouds are in practice 2.5D, as they cover only partial underlying shapes, which poses a fundamental challenge to 3D perception. To tackle this challenge, we present a novel LiDAR-based 3D object detection model, dubbed Behind the Curtain Detector (BtcDet), which learns object shape priors and estimates the complete object shapes that are partially occluded (curtained) in point clouds. BtcDet first identifies the regions affected by occlusion and signal miss. In these regions, our model predicts the probability of occupancy, indicating whether a region contains object shapes. Integrated with this probability map, BtcDet can generate high-quality 3D proposals. Finally, the probability of occupancy is also integrated into a proposal refinement module to generate the final bounding boxes. Extensive experiments on the KITTI dataset and the Waymo Open Dataset demonstrate the effectiveness of BtcDet. In particular, for the 3D detection of both cars and cyclists on the KITTI benchmark, BtcDet surpasses all published state-of-the-art methods by remarkable margins. Code is released at https://github.com/xharlie/btcdet.
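An illustrative sketch of the two-step idea (module names and tensor shapes are assumptions, not the BtcDet release): predict an occupancy probability only in regions affected by occlusion or signal miss, then fuse that probability map with the voxel features that feed proposal generation.

```python
import torch
import torch.nn as nn

class OccupancyHead(nn.Module):
    """Hypothetical head that scores occluded voxels for object occupancy."""
    def __init__(self, c_in=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),
        )

    def forward(self, voxel_feats, occluded_mask):
        # voxel_feats: (B, C, D, H, W); occluded_mask: (B, 1, D, H, W) in {0, 1}
        logits = self.net(voxel_feats)
        p_occ = torch.sigmoid(logits) * occluded_mask   # score only shadowed regions
        return torch.cat([voxel_feats, p_occ], dim=1)   # fused input for proposals
```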
Reconstructing 3D shapes from a single-view image is a long-standing research problem. In this paper, we present DISN, a Deep Implicit Surface Network that can generate high-quality, detail-rich 3D meshes from 2D images by predicting the underlying signed distance field. In addition to utilizing global image features, DISN predicts the projected location of each 3D point on the 2D image and extracts local features from the image feature maps. Combining global and local features significantly improves the accuracy of signed distance field prediction, especially for detail-rich areas. To the best of our knowledge, DISN is the first method that consistently captures details such as holes and thin structures present in 3D shapes from single-view images. DISN achieves state-of-the-art single-view reconstruction performance on a variety of shape categories reconstructed from both synthetic and real images. Code is available at https://github.com/xharlie/disn. The supplementary material can be found at https://xharlie.github.io/images/neUrips_2019_Supp.pdf.
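A simplified sketch of the projection-and-sampling mechanism (an assumption about the approach, not the DISN release): each 3D query point is projected into the image, local features are bilinearly sampled at that location, and global plus local features are fused by an MLP to regress the signed distance.

```python
import torch
import torch.nn.functional as F

def predict_sdf(points, cam_proj, feat_map, global_feat, mlp):
    # points: (B, N, 3) query points; cam_proj: (B, 3, 4) projection matrices,
    # assumed here to map homogeneous 3D points to normalized [0, 1] image coords
    # feat_map: (B, C, H, W) image features; global_feat: (B, G); mlp: fusion MLP
    ones = torch.ones_like(points[..., :1])
    uvw = torch.einsum('bij,bnj->bni', cam_proj, torch.cat([points, ones], dim=-1))
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)        # (B, N, 2) in [0, 1]
    grid = uv.unsqueeze(2) * 2.0 - 1.0                      # grid_sample wants [-1, 1]
    local = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
    local = local.squeeze(-1).transpose(1, 2)               # (B, N, C) local features
    g = global_feat.unsqueeze(1).expand(-1, points.shape[1], -1)
    return mlp(torch.cat([points, local, g], dim=-1))       # (B, N, 1) SDF values
```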
We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.
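As a concrete illustration of the similarity-matrix representation (a minimal sketch, not the SGPN release; the threshold value and seed-based grouping are simplifying assumptions): pairwise distances between per-point embeddings define the matrix, and thresholding a row yields one instance proposal.

```python
import torch

def similarity_matrix(embeddings):
    # embeddings: (N, D) per-point features from the backbone
    return torch.cdist(embeddings, embeddings)       # (N, N) pairwise distances

def group_proposal(sim, seed_idx, threshold=0.5):
    # Points closer than `threshold` to the seed point in feature space
    # form one instance proposal.
    return (sim[seed_idx] < threshold).nonzero(as_tuple=True)[0]
```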
The Makespan Scheduling problem is an extensively studied NP-hard problem, and its simplest version asks for an allocation of a set of jobs with deterministic processing times to two identical machines such that the makespan is minimized. However, in real-life scenarios, the actual processing time of each job may be stochastic around the expected value with a variance, under the influence of external factors, and the actual processing times of these jobs may be correlated with covariances. Thus, in this paper, we propose a chance-constrained version of the Makespan Scheduling problem and investigate the theoretical performance of the classical Randomized Local Search and (1+1) EA for it. More specifically, we first study two variants of the Chance-constrained Makespan Scheduling problem and their computational complexities, then separately analyze the expected runtime of the two algorithms to obtain an optimal solution or an almost optimal solution to instances of the two variants. In addition, we investigate the experimental performance of the two algorithms for the two variants.
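For reference, here is a standard (1+1) EA for the deterministic two-machine makespan problem, as a hedged illustration of the algorithms analyzed; the chance-constrained variants additionally account for variances and covariances of the processing times, which this sketch omits.

```python
import random

def makespan(assignment, times):
    # assignment[i] in {0, 1}: machine of job i; times[i]: processing time of job i
    m0 = sum(t for a, t in zip(assignment, times) if a == 0)
    m1 = sum(t for a, t in zip(assignment, times) if a == 1)
    return max(m0, m1)

def one_plus_one_ea(times, iterations=100_000):
    n = len(times)
    x = [random.randint(0, 1) for _ in range(n)]   # random initial allocation
    for _ in range(iterations):
        # Standard bit mutation: flip each bit independently with probability 1/n.
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if makespan(y, times) <= makespan(x, times):   # accept if not worse
            x = y
    return x, makespan(x, times)

print(one_plus_one_ea([4, 7, 2, 5, 9, 3], iterations=10_000))
```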
Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction. In this work, we propose the first topologically and feature-wise accurate metric and loss function for supervised image segmentation, which we term Betti matching. We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute the Betti matching of images. We show that the Betti matching error is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error. Moreover, the differentiability of the Betti matching loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance. Our code is available at https://github.com/nstucki/Betti-matching.
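The snippet below is not the Betti matching algorithm itself; it is a much-simplified illustration of why spatial correspondence matters. The classical Betti-0 number error compares only component counts, so a prediction with the right number of components in the wrong places still scores zero error, which is exactly the insensitivity that Betti matching penalizes.

```python
import numpy as np
from scipy.ndimage import label

def betti0_error(pred, gt):
    # pred, gt: binary masks; Betti-0 = number of connected components
    return abs(label(pred)[1] - label(gt)[1])

gt = np.zeros((8, 8), int); gt[1:3, 1:3] = 1      # one component, top-left
pred = np.zeros((8, 8), int); pred[5:7, 5:7] = 1  # one component, bottom-right
print(betti0_error(pred, gt))  # 0 -- counts match although locations do not
```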
Many visualization techniques have been created to help explain the behavior of convolutional neural networks (CNNs), but they largely consist of static diagrams that convey limited information. Interactive visualizations can provide richer insights and allow users to more easily explore a model's behavior; however, they are typically not easily reusable and are specific to a particular model. We introduce Visual Feature Search, a novel interactive visualization that is generalizable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar CNN features. It supports searching through large image datasets with an efficient cache-based search implementation. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on supervised, self-supervised, and human-edited CNNs. We also release a portable Python library and several IPython notebooks to enable researchers to easily use our tool in their own experiments. Our code can be found at https://github.com/lookingglasslab/VisualFeatureSearch.
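A minimal sketch of cache-based feature search as the abstract describes it (illustrative; the class name and interface are assumptions, not the released tool's API): precompute CNN features for the dataset once, then rank images by cosine similarity to a query-region feature.

```python
import torch

class FeatureCache:
    """Hypothetical cache: one feature vector per dataset image."""
    def __init__(self, model, images):
        with torch.no_grad():
            feats = torch.stack([model(img.unsqueeze(0)).flatten() for img in images])
        self.feats = torch.nn.functional.normalize(feats, dim=1)   # (N, D)

    def search(self, query_feat, k=5):
        # query_feat: feature of the highlighted image region
        q = torch.nn.functional.normalize(query_feat.flatten(), dim=0)
        scores = self.feats @ q                    # cosine similarity against cache
        return scores.topk(k).indices              # indices of the k best matches
```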